
    Off-line evaluation of indoor positioning systems in different scenarios: the experiences from IPIN 2020 competition

    Get PDF
    Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition provides a unique overview of the state of the art of systems, technologies, and methods for indoor positioning and navigation. Through fair comparison of the performance achieved by each system, the competition is able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included five diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results in terms of participation and accuracy of the proposed systems have been encouraging: the best-performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. Although the systems ran only as algorithms rather than on physical hardware, these results represent impressive achievements.

    Track 3 organizers were supported by the European Union’s Horizon 2020 Research and Innovation programme under the Marie Skłodowska Curie Grant 813278 (A-WEAR: A network for dynamic WEarable Applications with pRivacy constraints), MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE), INSIGNIA (MICINN, ref. PTQ2018-009981), and REPNIN+ (MICINN, ref. TEC2017-90808-REDT). We would like to thank the UJI Library managers and employees for their support while collecting the required datasets for Track 3. Track 5 organizers were supported by the JST-OPERA Program, Japan, under Grant JPMJOP1612. Track 7 organizers were supported by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of “BAYERN DIGITAL II.” Team UMinho (Track 3) was supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope under Grant UIDB/00319/2020, and by the Ph.D. Fellowship under Grant PD/BD/137401/2018. Team YAI (Track 3) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 109-2221-E-197-026. Team Indora (Track 3) was supported in part by the Slovak Grant Agency, Ministry of Education and Academy of Science, Slovakia, under Grant 1/0177/21, and in part by the Slovak Research and Development Agency under Contract APVV-15-0091. Team TJU (Track 3) was supported in part by the National Natural Science Foundation of China under Grant 61771338 and in part by the Tianjin Research Funding under Grant 18ZXRHSY00190. Team Next-Newbie Reckoners (Track 3) was supported by the Singapore Government through the Industry Alignment Fund—Industry Collaboration Projects Grant; this research was conducted at the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU). Team KawaguchiLab (Track 5) was supported by JSPS KAKENHI under Grant JP17H01762. Team WHU&AutoNavi (Track 6) was supported by the National Key Research and Development Program of China under Grant 2016YFB0502202. Team YAI (Tracks 6 and 7) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 110-2634-F-155-001.
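    As a rough illustration of the headline metric above, the third quartile of error can be computed from a sequence of estimated and ground-truth positions as sketched below. This is a minimal sketch only: the function name and the trajectory data are made up for the example, and the real EvAAL evaluation also accounts for aspects such as floor detection that are omitted here.

```python
import numpy as np

def third_quartile_error(estimated, ground_truth):
    """Return the 75th percentile of the horizontal positioning error (metres).

    Both arguments are (N, 2) arrays of x/y coordinates sampled at the same
    evaluation timestamps.
    """
    errors = np.linalg.norm(np.asarray(estimated) - np.asarray(ground_truth), axis=1)
    return np.percentile(errors, 75)

# Illustrative data only: a short trajectory with small estimation errors.
truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 1.0]])
est = truth + np.array([[0.2, 0.1], [-0.3, 0.2], [0.5, -0.4], [0.1, 0.3]])
print(f"third quartile of error: {third_quartile_error(est, truth):.2f} m")
```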

    Multi-sensors for realization of home tele-rehabilitation

    No full text
    Research in assistive healthcare, in particular home rehabilitation, has spawned huge potential owing to the recent advancement of Internet-of-Things technology and wearable hardware: the Inertial Measurement Units (IMUs) in wearable sensors and smartphones have become affordable for community use. However, using low-cost IMU sensors or smartphones poses certain challenges, such as accurate orientation estimation for lower-limb motion tracking, which is usually less of a problem in specialised motion-tracking sensor devices. To address these issues, the candidate has made three main contributions: a new and better orientation estimation algorithm that combines a quaternion-based Kalman filter with corrector estimates using gradient descent (KFGD), an automatic detector of oscillation in the post-filtered lower-limb orientation signal, and machine-learning-based state identification of rehabilitation exercises. Firstly, obtaining accurate orientation readings is a key challenge in motion-tracking research, because the IMU is noise-prone and the post-processed estimate drifts: errors accumulate as the gyroscope signal is integrated to calculate the angular displacement, in other words the orientation of the limb. Thus, the candidate proposes two sensor-fusion algorithms: the complementary filter feedback (CFF) and the quaternion-based Kalman filter with corrector estimates using gradient descent (KFGD). The CFF focuses on the contributions of a high-pass filter (applied to the angular velocity) and a low-pass filter (applied to the fusion of gravity and the Earth's magnetic field); these components contribute to the estimated orientation, while the proposed feedback loop corrects the drift. KFGD is later introduced to overcome the limitations of the low-pass filter and the fixed fusion threshold of the CFF; the gradient-descent method and the quaternion-based Kalman filter are chosen for their progressive features. The performance was evaluated on a case study of early-stage rehabilitation exercises, namely leg extension and sit-to-stand. The results show that CFF is capable of fast motion tracking and confirm that the feedback loop can correct errors caused by the integration of gyroscope data. KFGD outperforms the state-of-the-art Madgwick algorithm and is recommended for obtaining accurate orientation readings using motion sensors. Secondly, upon examining the characteristics of the post-filtered lower-limb orientation signals, the candidate observed a noticeable artifact: the output signal would oscillate from positive to negative and vice versa. To address these oscillations in the signals of both motion-capture and inertial-measurement sensors, the candidate applied machine-learning algorithms and compared them with a rule-based approach. Machine-learning methods such as Logistic Regression, Support Vector Machine, and Multilayer Perceptron were adopted to detect the oscillation automatically. The results showed that machine-learning methods are able to learn the oscillation patterns in wearable-sensor data and identify the tendency of fluctuation, thereby allowing the errors to be filtered out more efficiently than with the rule-based method. Lastly, in order to realise meaningful home rehabilitation, informative feedback or intervention is needed in parallel with the exercise monitoring. The study aims to use the collected data and the understanding of the wearable signal to simulate the high-level observations a physiotherapist makes of a patient and to provide informative feedback during exercising at home. Therefore, the candidate proposes a study on machine-learning-based state identification of rehabilitation exercises using wearable sensors on the lower limbs. Informative feedback and quality assessment can be obtained by segmenting the exercise into four states: rest, raise, hold, and drop. The segmentation potentially increases the frequency of detection, resulting in almost real-time feedback; in addition, identifying abnormal sequences against the correct pattern in the respective state results in more specific and informative feedback. In this work, the candidate analyses the impact of the extracted sensor signals and derives valuable insights in relation to the predicted states. As a result, the predictive model yields up to 95.89% (SVM) and 94.04% (SVM) accuracy for binary and multi-label pattern recognition, respectively. The experiments and the recommended framework show the efficiency and potential of using signal data as features in motion-based exercise pattern recognition. The work presented in this thesis demonstrates the realisation of home rehabilitation from the hardware level to the simulation of user intervention: the methodologies exploit affordable hardware to correctly track limb motion, while the motion-signal prediction model and analysis boost the potential of the intervention strategy for the user's home exercise feedback. Doctor of Philosophy
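    The abstract does not reproduce the KFGD equations, so the following is a minimal sketch of only the gradient-descent corrector idea it builds on (in the style of Madgwick's filter): the gyroscope-integrated quaternion is nudged towards the orientation implied by the accelerometer's gravity reading. The step size `beta`, the function names, and the single-sample usage are illustrative assumptions; the full KFGD additionally fuses this correction inside a quaternion-based Kalman filter, which is omitted here.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def gradient_corrected_update(q, accel, gyro, dt, beta=0.05):
    """One orientation update: integrate the gyroscope, then apply a
    gradient-descent correction that pulls the gravity direction predicted
    by q towards the (normalised) accelerometer reading."""
    w, x, y, z = q
    a = accel / np.linalg.norm(accel)
    # Error between gravity as predicted by q and as measured by the accelerometer.
    f = np.array([
        2*(x*z - w*y) - a[0],
        2*(w*x + y*z) - a[1],
        2*(0.5 - x*x - y*y) - a[2],
    ])
    # Jacobian of f with respect to [w, x, y, z].
    J = np.array([
        [-2*y,  2*z, -2*w, 2*x],
        [ 2*x,  2*w,  2*z, 2*y],
        [ 0.0, -4*x, -4*y, 0.0],
    ])
    grad = J.T @ f
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro]))   # gyroscope integration term
    norm = np.linalg.norm(grad)
    if norm > 1e-12:                                      # skip correction when already aligned
        q_dot -= beta * grad / norm
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

# Illustrative single step: slightly tilted accelerometer, small gyroscope rate.
q = np.array([1.0, 0.0, 0.0, 0.0])
q = gradient_corrected_update(q, accel=np.array([0.5, 0.0, 9.8]),
                              gyro=np.array([0.01, 0.0, 0.0]), dt=0.01)
print(q)
```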

    Analysis of mood using video

    No full text
    Humans use communication to express their state of mood, usually nonverbal communication such as hand gestures, facial expressions, and tone of voice. The face, however, is our primary focus of attention and plays an important role in identifying emotion. Understanding a person's mood is beneficial because a computer could monitor the user's mood and adapt its human-computer interaction to create a more user-friendly environment. Analysis of Mood using Video is an application that reads human facial expressions and makes a guess at what the user is feeling. We stick to the six basic emotions proposed by Ekman (1972): anger, disgust, fear, happiness, sadness, and surprise. The project includes exploring and building the model to address the accuracy challenge in facial feature detection and in classification using a Support Vector Machine. In conclusion, the project was successfully designed and implemented from ground zero to full functionality. We achieved more than the project's original goal: a complete C++ desktop application with facial feature detection, dataset generation, data classification, and mood prediction. In addition, an Android application was implemented in Java as an enhancement, reusing the existing native C++ libraries and code while switching platforms; this mobile application uses the JNI framework as well as the Android NDK to wrap the previously used native libraries. Another highlight is a simple music function that was completed and added to the Android application; it serves as music therapy, changing the song according to the user's mood. Bachelor of Engineering (Computer Science)
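    The classification stage described above can be sketched as follows, assuming facial-landmark features have already been extracted and flattened into fixed-length vectors. The feature dimensions and the randomly generated data are placeholders; the project's actual C++ feature-extraction pipeline is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Placeholder data: in the real project each row would be a vector of
# facial-landmark features extracted from a video frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 136))           # e.g. 68 landmarks x (x, y)
y = rng.integers(0, len(EMOTIONS), 600)   # one of the six Ekman emotions

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print("predicted mood:", EMOTIONS[clf.predict(X_test[:1])[0]])
```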

    SingTRACeX: Navigation System to Address Wandering Behavior for Elders and Their Caregivers

    Get PDF
    An ever-increasing ageing population places a growing burden on caregivers of the elderly. Caring for elders, especially those diagnosed with dementia, can be challenging: people living with dementia (PWD) require extra care and attention from caregivers due to the behaviours associated with dementia. Wandering is a frequent behaviour exhibited by PWD, which can bring about negative outcomes for the PWD as well as increase the stress of the caregivers. Though many technological solutions exist, they are not widely deployed. This paper introduces a technological framework that bridges localisation technologies to the needs of elders and caregivers, with the aim of minimising or eliminating the negative outcomes of dementia wandering and reducing the burden and stress on caregivers, thus improving overall well-being. In this paper, we study the features of the SingTRACeX application by considering user needs gathered from a field study with two focus group discussions (FGDs) comprising 14 professional caregivers and coordinators. The proposed system features real-time location tracking and indoor localisation: the location is determined by GPS from the Sensor module when outdoors, and estimated using data from the WiFi and Bluetooth modules when indoors, while the indoor navigation provided by the Indoor Localisation module uses an A-star search algorithm. This paper could serve as a foundation to be built upon over time, as the needs of elders and caregivers may change and the evolution of technology may bring about new methods to address those needs.
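    The paper names an A-star search for indoor navigation without listing an implementation, so the sketch below shows the general idea on a small occupancy-grid floor map. The grid, heuristic, and function are illustrative assumptions rather than the SingTRACeX code.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = wall).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    def h(cell):                      # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:      # already expanded via a shorter path
            continue
        came_from[current] = parent
        if current == goal:           # walk back through parents to rebuild the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), current))
    return None

# Illustrative floor map: a small corridor with one wall segment.
floor = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(a_star(floor, start=(0, 0), goal=(2, 0)))
```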

    extendGAN+: Transferable Data Augmentation Framework Using WGAN-GP for Data-Driven Indoor Localisation Model

    No full text
    In data-driven indoor localisation, a key challenge is ensuring sufficient data to train the prediction model to a good accuracy. For WiFi-based data collection, however, considerable human effort is still required to capture a large amount of data, as the Received Signal Strength (RSS) representation can easily be affected by obstacles and other factors. In this paper, we propose an extendGAN+ pipeline that leverages up-sampling with the Dirichlet distribution to improve location prediction accuracy with small sample sizes, applies a transferred WGAN-GP for synthetic data generation, and ensures data quality with a filtering module. The results highlight the effectiveness of the proposed data augmentation method not only through localisation performance but also through the variety of RSS patterns it can produce. Benchmarked against baseline methods such as fingerprinting, random forest, and the base dataset with localisation models, extendGAN+ shows improvements of up to 23.47%, 25.35%, and 18.88%, respectively. Furthermore, compared to existing GAN+ methods, it reduces training time by a factor of four thanks to transfer learning and improves performance by 10.13%.
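    The Dirichlet up-sampling step can be sketched as follows: synthetic fingerprints for a reference point are formed as convex combinations of the few measured RSS vectors at that point, with mixing weights drawn from a Dirichlet distribution. The array shapes, function name, and concentration parameter are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def dirichlet_upsample(fingerprints, n_new, alpha=1.0, seed=None):
    """Generate `n_new` synthetic RSS fingerprints for one reference point.

    `fingerprints` is an (n_real, n_aps) array of measured RSS vectors;
    each synthetic vector is a convex combination of the real ones with
    weights drawn from a symmetric Dirichlet(alpha) distribution.
    """
    rng = np.random.default_rng(seed)
    fingerprints = np.asarray(fingerprints, dtype=float)
    weights = rng.dirichlet(alpha * np.ones(len(fingerprints)), size=n_new)
    return weights @ fingerprints   # shape (n_new, n_aps)

# Illustrative: three measured fingerprints over four access points (dBm).
real = [[-45, -60, -72, -80],
        [-47, -58, -75, -82],
        [-44, -61, -70, -79]]
print(dirichlet_upsample(real, n_new=5, seed=0).round(1))
```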

    Smartphone orientation estimation algorithm combining Kalman filter with gradient descent

    No full text
    The availability and all-in-one functionality of smartphones have made them a multipurpose personal tool that improves our daily life. Recent advancements in hardware and the accessibility of smartphones have spawned huge potential for assistive healthcare, in particular telerehabilitation. However, using smartphone sensors poses certain challenges, in particular accurate orientation estimation, which is usually less of a problem in specialised motion-tracking sensor devices; drift is one of these challenges. We first propose a simple feedback-loop complementary filter (CFF) to reduce the error caused by the integration of the gyroscope's data in the orientation estimation. Next, we propose a new and better orientation estimation algorithm that combines a quaternion-based Kalman filter with corrector estimates using gradient descent (KFGD). We then evaluate the performance of CFF and KFGD on two early-stage rehabilitation exercises. The results show that CFF is capable of fast motion tracking and confirm that the feedback loop can correct the error caused by the integration of gyroscope data. The KFGD orientation estimate is comparable to the XSENS Awinda reference and has shown itself to be more stable than, and to outperform, CFF. KFGD also outperforms the prominent Madgwick algorithm on mobile data. Thus, KFGD is suitable for low-cost motion sensors or mobile inertial sensors, especially during the early recovery stage of sports injuries and for exercise by the elderly.
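    The complementary-filter idea behind CFF can be sketched for a single tilt angle as below: the gyroscope is integrated (high-pass behaviour) and blended with the tilt implied by the accelerometer (low-pass behaviour). The blend factor and the one-dimensional simplification are illustrative assumptions; the published CFF operates on full orientation and adds its own feedback correction.

```python
import math

def complementary_filter(gyro_rate, accel, angle, dt, alpha=0.98):
    """One update of a 1D complementary filter for pitch (radians).

    gyro_rate : angular velocity about the pitch axis (rad/s)
    accel     : (ax, ay, az) accelerometer sample (m/s^2)
    angle     : previous pitch estimate (rad)
    """
    accel_angle = math.atan2(accel[0], math.hypot(accel[1], accel[2]))
    # High-pass the integrated gyroscope, low-pass the accelerometer tilt.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Illustrative usage: static sensor with a small gyroscope bias; the
# accelerometer term keeps the drifting gyroscope integration bounded.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(gyro_rate=0.01, accel=(0.0, 0.0, 9.81),
                                 angle=angle, dt=0.01)
print(f"pitch after 2 s: {math.degrees(angle):.2f} deg")
```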
